Results 1 - 9 of 9
1.
Med Image Anal ; 91: 102984, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37837690

ABSTRACT

The accurate delineation of organs-at-risk (OARs) is a crucial step in treatment planning during radiotherapy, as it minimizes the potential adverse effects of radiation on surrounding healthy organs. However, manual contouring of OARs in computed tomography (CT) images is labor-intensive and susceptible to errors, particularly for low-contrast soft tissue. Deep learning-based artificial intelligence algorithms surpass traditional methods but require large datasets. Obtaining annotated medical images is both time-consuming and expensive, hindering the collection of extensive training sets. To enhance the performance of medical image segmentation, augmentation strategies such as rotation and Gaussian smoothing are employed during preprocessing. However, these conventional data augmentation techniques cannot generate realistic deformations, limiting improvements in accuracy. To address this issue, this study introduces a statistical deformation model-based data augmentation method for volumetric medical image segmentation. By applying diverse and realistic data augmentation to CT images from a limited patient cohort, our method significantly improves the fully automated segmentation of OARs across various body parts. We evaluate our framework on three datasets containing tumor OARs in the head, neck, chest, and abdomen. Test results demonstrate that the proposed method achieves state-of-the-art performance on numerous OAR segmentation challenges. This approach holds considerable potential as a powerful tool for various medical imaging-related subfields, effectively addressing the challenge of limited data access.
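The warping step behind such deformation-based augmentation can be sketched in a few lines. The sketch below uses a generic smoothed-random displacement field; the paper instead samples displacements from a statistical deformation model, which this simplified example does not implement, and the function name and parameters are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(volume, sigma=4.0, alpha=8.0, seed=None):
    """Warp a 3D volume with a smooth random displacement field.

    A generic elastic deformation: a statistical deformation model would
    sample the displacement field from a PCA model fitted to inter-patient
    registrations, but the resampling step is the same.
    """
    rng = np.random.default_rng(seed)
    shape = volume.shape
    # One random displacement component per axis, smoothed so the warp
    # is spatially coherent, then scaled by alpha (in voxels).
    disp = [gaussian_filter(rng.standard_normal(shape), sigma) * alpha
            for _ in range(3)]
    grid = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, disp)]
    # Trilinear resampling of the volume at the displaced coordinates.
    return map_coordinates(volume, coords, order=1, mode="nearest")

# Toy volume: a bright cube inside an empty field.
vol = np.zeros((32, 32, 32), dtype=np.float32)
vol[12:20, 12:20, 12:20] = 1.0
aug = elastic_deform(vol, seed=0)
```

The same displacement field would be applied to the segmentation labels (with nearest-neighbour interpolation) so image and annotation stay aligned.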


Subjects
Artificial Intelligence; Neoplasms; Humans; Algorithms; Neck; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods; Radiotherapy Planning, Computer-Assisted/methods
2.
Med Image Anal ; 91: 102998, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37857066

ABSTRACT

Radiotherapy serves as a pivotal treatment modality for malignant tumors. However, the accuracy of radiotherapy is significantly compromised by respiratory-induced fluctuations in the size, shape, and position of the tumor. To address this challenge, we introduce a deep learning-based volumetric tumor tracking methodology that employs single-angle X-ray projection images. The process aligns intraoperative two-dimensional (2D) X-ray images with the pre-treatment three-dimensional (3D) planning computed tomography (CT) scans, enabling extraction of the 3D tumor position and segmentation. Prior to therapy, a bespoke patient-specific tumor tracking model is formulated, leveraging a hybrid data augmentation, style correction, and registration network to create a mapping from single-angle 2D X-ray images to the corresponding 3D tumors. During the treatment phase, real-time X-ray images are fed into the trained model, producing the respective 3D tumor positions. Rigorous validation on actual patient lung data and lung phantoms confirms the high localization precision of our method at reduced radiation doses, a promising step toward enhancing the precision of radiotherapy.


Subjects
Deep Learning; Neoplasms; Humans; Imaging, Three-Dimensional/methods; X-Rays; Tomography, X-Ray Computed/methods; Neoplasms/diagnostic imaging; Neoplasms/radiotherapy; Cone-Beam Computed Tomography/methods
3.
Bioengineering (Basel) ; 10(11)2023 Nov 14.
Article in English | MEDLINE | ID: mdl-38002438

ABSTRACT

The detection of coronavirus disease 2019 (COVID-19) is crucial for controlling the spread of the virus. Current research utilizes X-ray imaging and artificial intelligence for COVID-19 diagnosis. However, conventional X-ray scans expose patients to excessive radiation, rendering repeated examinations impractical. Ultra-low-dose X-ray imaging technology enables rapid and accurate COVID-19 detection with minimal additional radiation exposure. In this retrospective cohort study, we present ULTRA-X-COVID, a deep neural network specifically designed for automatic detection of COVID-19 infections using ultra-low-dose X-ray images. The study included a multinational, multicenter dataset of 30,882 X-ray images obtained from approximately 16,600 patients across 51 countries, with no overlap between the training and test sets. The data analysis was conducted from 1 April 2020 to 1 January 2022. To evaluate the effectiveness of the model, we used the area under the receiver operating characteristic curve (AUC), accuracy, specificity, and F1 score. In the test set, the model demonstrated an AUC of 0.968 (95% CI, 0.956-0.983), accuracy of 94.3%, specificity of 88.9%, and F1 score of 99.0%. Notably, the ULTRA-X-COVID model achieved performance comparable to that obtained at conventional X-ray doses, with a prediction time of only 0.1 s per image. These findings suggest that the ULTRA-X-COVID model can effectively identify COVID-19 cases using ultra-low-dose X-ray scans, providing a novel alternative for COVID-19 detection. Moreover, the model exhibits potential adaptability for the diagnosis of various other diseases.
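The evaluation metrics named above can be computed with standard scikit-learn calls. The labels and scores below are illustrative toy data, not the study's dataset.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             roc_auc_score)

# Toy ground-truth labels and classifier scores (one false positive at index 4).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
y_score = np.array([0.9, 0.2, 0.8, 0.7, 0.6, 0.1, 0.95, 0.3, 0.6, 0.85])
y_pred = (y_score >= 0.5).astype(int)  # hard decisions at a 0.5 threshold

auc = roc_auc_score(y_true, y_score)          # threshold-free ranking quality
acc = accuracy_score(y_true, y_pred)          # fraction correct
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)                  # true-negative rate
f1 = f1_score(y_true, y_pred)                 # harmonic mean of P and R
```

Note that specificity is not provided directly by scikit-learn; it is derived from the confusion matrix as above.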

4.
Healthcare (Basel) ; 11(22)2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37998461

ABSTRACT

The COVID-19 pandemic continues to affect the world. Wuhan, the epicenter of the outbreak, underwent a 76-day lockdown. Research has indicated that the lockdown negatively impacted the quality of life of older individuals, but little is known about their specific experiences during the confinement period. Qualitative interviews were conducted with 20 elderly residents of Wuhan, aged 65 to 85, who experienced mandatory isolation throughout the pandemic. The interviews centered on three stages of experience: the Early Lockdown stage (the first week after the government implemented the lockdown policy in January 2020), the Infection During Lockdown stage (from February to April 2020, when participants were affected by the lockdown), and the Post-Lockdown stage (after April 2020, when the government lifted the lockdown policy). We found that older adults experienced different core themes during each stage. In the Early Lockdown stage, they felt nervousness and fear while searching for information. In the Infection During Lockdown stage, they relied on reciprocal support and adjusted to new lifestyles. In the Post-Lockdown stage, they expressed caution, trust, and gratitude. The findings highlight the evolving emotions and coping strategies of older adults throughout the lockdown phases. This study yields valuable insights into behavioral adaptations and the importance of social interactions, specifically emphasizing the significance of healthcare among the elderly population.

5.
Phys Med Biol ; 68(20)2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37714184

ABSTRACT

Objective. Computed tomography (CT) is a widely employed imaging technology for disease detection. However, CT images often suffer from ring artifacts, which may result from hardware defects and other factors. These artifacts compromise image quality and impede diagnosis. To address this challenge, we propose a novel method based on a dual contrastive learning image style-transformation network (DCLGAN) that effectively eliminates ring artifacts from CT images while preserving texture details. Approach. Our method involves simulating ring artifacts on real CT data to generate uncorrected CT (uCT) data and transforming the data into the polar coordinate system, where the ring artifacts become stripe artifacts. The DCLGAN synthesis network is then applied in the polar coordinate system to remove the stripe artifacts and generate a synthetic CT (sCT). We compare the uCT and sCT images to obtain a residual image, which is then filtered to extract the stripe artifacts. An inverse polar transformation yields the ring artifacts, which are subtracted from the original CT image to produce a corrected image. Main results. To validate the effectiveness of our approach, we tested it on real CT data, simulated data, and cone-beam computed tomography images of patients' brains. The corrected CT images showed a reduction in mean absolute error of 12.36 Hounsfield units (HU), a decrease in root mean square error of 18.94 HU, an increase in peak signal-to-noise ratio of 3.53 decibels (dB), and an improvement in structural similarity index of 9.24%. Significance. These results demonstrate the efficacy of our method in eliminating ring artifacts while preserving image details, making it a valuable tool for CT imaging.
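The polar-coordinate trick at the heart of this pipeline, in which rings become stripes that are constant along the angular axis, can be sketched without the learned network. Everything below (grid sizes, the simulated ring, and averaging over angle to isolate the stripe component) is an illustrative simplification, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, n_r=None, n_t=360):
    """Resample a square image onto an (angle, radius) grid centered at
    the image center, so ring artifacts become straight vertical stripes."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    n_r = n_r or min(h, w) // 2
    theta = np.linspace(0, 2 * np.pi, n_t, endpoint=False)
    r = np.arange(n_r)
    tt, rr = np.meshgrid(theta, r, indexing="ij")
    ys = cy + rr * np.sin(tt)
    xs = cx + rr * np.cos(tt)
    return map_coordinates(img, [ys, xs], order=1, mode="nearest")

# Simulate a ring artifact: a constant offset in a narrow radial band.
img = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
rad = np.hypot(yy - 63.5, xx - 63.5)
img[(rad > 30) & (rad < 32)] += 5.0

polar = to_polar(img)
# In polar space a ring is constant along the angular axis, so averaging
# over angle isolates the ring (stripe) profile as a function of radius.
stripes = polar.mean(axis=0, keepdims=True)
```

In the full method the stripe image is removed by the synthesis network and transformed back to Cartesian coordinates before subtraction; here the angular average merely demonstrates why the stripe component is easy to separate.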

6.
Comput Biol Med ; 165: 107377, 2023 10.
Article in English | MEDLINE | ID: mdl-37651766

ABSTRACT

PURPOSE: Cone-beam computed tomography (CBCT) is widely utilized in modern radiotherapy; however, CBCT images exhibit increased scatter artifacts compared to planning CT (pCT), compromising image quality and limiting further applications. Scatter correction is thus crucial for improving CBCT image quality. METHODS: In this study, we proposed an unsupervised contrastive learning method for CBCT scatter correction. First, we transformed low-quality CBCT into high-quality synthetic pCT (spCT) and generated forward projections of both the CBCT and the spCT. By computing the difference between these projections, we obtained a residual image containing both image details and scatter artifacts. Image details consist primarily of high-frequency signals, while scatter artifacts consist mainly of low-frequency signals. We therefore extracted the scatter projection signal by applying a low-pass filter to remove the image details. The corrected CBCT (cCBCT) projection signal was obtained by subtracting the scatter projection signal from the original CBCT projection. Finally, we employed the FDK reconstruction algorithm to generate the cCBCT image. RESULTS: To evaluate cCBCT image quality, we aligned the CBCT and pCT images of six patients. Compared with CBCT, cCBCT maintains anatomical consistency and significantly improves CT number accuracy, spatial homogeneity, and artifact suppression. The mean absolute error (MAE) of the test data decreased from 88.0623 ± 26.6700 HU to 17.5086 ± 3.1785 HU. The MAE of fat regions of interest (ROIs) declined from 370.2980 ± 64.9730 HU to 8.5149 ± 1.8265 HU, and the difference between their maximum and minimum CT numbers decreased from 572.7528 HU to 132.4648 HU. The MAE of muscle ROIs decreased from 354.7689 ± 25.0139 HU to 16.4475 ± 3.6812 HU. We also compared our proposed method with several conventional unsupervised synthetic image generation techniques, demonstrating superior performance.
CONCLUSIONS: Our approach effectively enhances CBCT image quality and shows promising potential for future clinical adoption.
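The projection-domain scatter estimation step can be sketched directly: subtract the synthetic-pCT projection from the CBCT projection and low-pass filter the residual. The toy projections and the Gaussian filter width below are assumptions for illustration; the paper's network-generated spCT and FDK reconstruction are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_scatter(cbct_proj, spct_proj, sigma=8.0):
    """Residual = CBCT projection - synthetic-pCT projection.
    Scatter is assumed low-frequency, so a heavy Gaussian low-pass of the
    residual approximates the scatter-only signal, discarding the
    high-frequency image-detail component."""
    residual = cbct_proj - spct_proj
    return gaussian_filter(residual, sigma)

# Toy projections: shared anatomy + smooth scatter + fine detail noise.
rng = np.random.default_rng(1)
anatomy = rng.random((64, 64))
scatter = gaussian_filter(rng.random((64, 64)), 10) * 2  # low-frequency
detail = rng.normal(0, 0.05, (64, 64))                    # high-frequency
cbct_proj = anatomy + scatter + detail
spct_proj = anatomy  # stand-in for the forward projection of the spCT

scatter_est = estimate_scatter(cbct_proj, spct_proj)
corrected = cbct_proj - scatter_est  # the cCBCT projection signal
```

In the full pipeline the corrected projections from every gantry angle would then be fed to FDK reconstruction to form the cCBCT volume.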


Subjects
Algorithms; Cone-Beam Computed Tomography; Humans; Cone-Beam Computed Tomography/methods; Artifacts; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Scattering, Radiation
7.
Comput Biol Med ; 161: 106888, 2023 07.
Article in English | MEDLINE | ID: mdl-37244146

ABSTRACT

X-ray computed tomography (CT) techniques play a vitally important role in clinical diagnosis, but radiation exposure can also induce cancer risk for patients. Sparse-view CT reduces the impact of radiation on the human body through sparsely sampled projections. However, images reconstructed from sparse-view sinograms often suffer from serious streaking artifacts. To overcome this issue, we propose an end-to-end attention-based deep network for image correction. First, an initial image is reconstructed from the sparse projections using the filtered back-projection algorithm. Next, the reconstructed result is fed into the deep network for artifact correction. More specifically, we integrate attention-gating modules into U-Net pipelines, which implicitly learn to emphasize relevant features beneficial for a given task while suppressing background regions. Attention is used to combine the local feature vectors extracted at intermediate stages of the convolutional neural network with the global feature vector extracted from the coarse-scale activation map. To improve the performance of our network, we incorporated a pre-trained ResNet50 model into our architecture. The model was trained and tested using the dataset from The Cancer Imaging Archive (TCIA), which consists of images of various human organs obtained from multiple views. The experiments demonstrate that the developed method is highly effective in removing streaking artifacts while preserving structural details. Additionally, quantitative evaluation of our proposed model shows significant improvement in peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean squared error (RMSE) compared to other methods, with an average PSNR of 33.9538 dB, SSIM of 0.9435, and RMSE of 45.1208 at 20 views. Finally, the transferability of the network was verified using the 2016 AAPM dataset.
This approach therefore holds great promise for achieving high-quality sparse-view CT images.
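The additive attention-gating computation integrated into the U-Net can be sketched in NumPy. The 1×1 projections `Wx`, `Wg`, and `psi` below are randomly initialized stand-ins for learned convolution weights; the shapes and dimensions are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate: x is the skip-connection feature map
    (C, H, W); g is the coarser gating signal already upsampled to
    (Cg, H, W). Wx and Wg project both inputs to a shared intermediate
    dimension; psi maps the joint activation to one attention
    coefficient per pixel, which rescales the skip features."""
    # 1x1 'convolutions' expressed as channel-mixing matmuls.
    q = np.einsum("ic,chw->ihw", Wx, x) + np.einsum("ic,chw->ihw", Wg, g)
    alpha = sigmoid(np.einsum("c,chw->hw", psi, np.maximum(q, 0.0)))
    return x * alpha  # suppress irrelevant regions, keep salient ones

rng = np.random.default_rng(0)
C, Cg, Ci, H, W = 8, 16, 4, 10, 10
x = rng.standard_normal((C, H, W))
g = rng.standard_normal((Cg, H, W))
out = attention_gate(x, g,
                     Wx=rng.standard_normal((Ci, C)) * 0.1,
                     Wg=rng.standard_normal((Ci, Cg)) * 0.1,
                     psi=rng.standard_normal(Ci) * 0.1)
```

Because the gate coefficients lie in (0, 1), the gated output never amplifies a skip feature; it only attenuates regions the gating signal marks as irrelevant.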


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Artifacts
8.
Front Oncol ; 13: 1127866, 2023.
Article in English | MEDLINE | ID: mdl-36910636

ABSTRACT

Objective: To develop a contrastive learning-based generative (CLG) model for the generation of high-quality synthetic computed tomography (sCT) from low-quality cone-beam CT (CBCT). The CLG model improves the performance of deformable image registration (DIR). Methods: This study included 100 patients after breast-conserving surgery, with pCT images, CBCT images, and physician-delineated target contours. sCT images were generated from the CBCT images via the proposed CLG model. We used the sCT images, instead of the CBCT images, as the fixed images to achieve accurate multi-modality image registration. The resulting deformation vector field was applied to propagate the target contour from the pCT to the CBCT, realizing automatic target segmentation on CBCT images. We calculated the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average surface distance (ASD) between the predicted and reference segmentations to evaluate the proposed method. Results: The DSC, HD95, and ASD of the target contours with the proposed method were 0.87 ± 0.04, 4.55 ± 2.18, and 1.41 ± 0.56, respectively. Compared with the traditional method without synthetic CT assistance (0.86 ± 0.05, 5.17 ± 2.60, and 1.55 ± 0.72), the proposed method performed better, especially for soft tissue targets such as the tumor bed region. Conclusion: The CLG model proposed in this study can create high-quality sCT from low-quality CBCT and improve the performance of DIR between the CBCT and the pCT. The target segmentation accuracy is better than with traditional DIR.
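Two pieces of this evaluation pipeline are easy to sketch: propagating a contour mask with a deformation vector field (DVF) and scoring the result with the Dice similarity coefficient. The integer-displacement warp below is a deliberately minimal stand-in for the interpolation a full DIR pipeline would use, and the masks and DVF are toy data.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def warp_mask(mask, dvf):
    """Propagate a binary mask with a (2, H, W) integer displacement
    vector field via nearest-neighbour lookup."""
    h, w = mask.shape
    yy, xx = np.mgrid[:h, :w]
    ys = np.clip(yy + dvf[0], 0, h - 1)
    xs = np.clip(xx + dvf[1], 0, w - 1)
    return mask[ys, xs]

# Toy contour mask and a uniform 2-pixel shift along x.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:40, 20:40] = 1
dvf = np.zeros((2, 64, 64), dtype=int)
dvf[1] += 2
moved = warp_mask(mask, dvf)
```

A real DVF from DIR is a dense floating-point field, and contour propagation would use sub-pixel interpolation; the comparison of the propagated and reference masks via DSC works identically.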

9.
Comput Biol Med ; 155: 106710, 2023 03.
Article in English | MEDLINE | ID: mdl-36842222

ABSTRACT

PURPOSE: Metal artifacts can significantly degrade the quality of computed tomography (CT) images. They arise as X-rays penetrate implanted metals, causing severe attenuation; the resulting degradation in image quality can hinder subsequent clinical diagnosis and treatment planning. Beam hardening artifacts often manifest as severe streak artifacts in the image domain, affecting the overall quality of the reconstructed CT image. In the sinogram domain, metal is typically confined to specific regions, and processing only these regions preserves the image information elsewhere, making the model more robust. To address this issue, we propose a region-based, deep learning correction of beam hardening artifacts in the sinogram domain. METHODS: We present a model composed of three modules: (a) a Sinogram Metal Segmentation Network (Seg-Net), (b) a Sinogram Enhancement Network (Sino-Net), and (c) a Fusion Module. The model first uses an Attention U-Net to segment the metal regions in the sinogram. The segmented metal regions are then interpolated to obtain a metal-free sinogram. Sino-Net is then applied to compensate for the loss of tissue and artifact information in the metal regions. The corrected metal sinogram and the interpolated metal-free sinogram are used to reconstruct the metal CT and metal-free CT images, respectively. Finally, the Fusion Module combines the two CT images to produce the result. RESULTS: Our proposed method performs strongly in both qualitative and quantitative evaluations. The peak signal-to-noise ratio (PSNR) of the CT images before and after correction was 18.22 and 30.32, respectively. The structural similarity index measure (SSIM) improved from 0.75 to 0.99, and the weighted peak signal-to-noise ratio (WPSNR) increased from 21.69 to 35.68.
CONCLUSIONS: Our proposed method demonstrates reliable, high-accuracy correction of beam hardening artifacts.
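The interpolation step that produces the metal-free sinogram can be sketched as classic linear inpainting of the segmented metal trace, row by row; the paper's Sino-Net then compensates for the information this simple interpolation loses. The toy sinogram and mask below are illustrative.

```python
import numpy as np

def inpaint_metal_trace(sinogram, metal_mask):
    """Replace the metal trace in each projection row with linear
    interpolation from the nearest unaffected detector bins."""
    out = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i].astype(bool)
        if bad.any() and not bad.all():
            # Interpolate the corrupted bins from the clean ones.
            out[i, bad] = np.interp(bins[bad], bins[~bad], sinogram[i, ~bad])
    return out

# Toy sinogram: a smooth ramp per row, with a metal 'spike' in bins 10-13.
sino = np.tile(np.linspace(0.0, 1.0, 32), (8, 1))
mask = np.zeros((8, 32), dtype=bool)
mask[:, 10:14] = True
corrupted = sino.copy()
corrupted[mask] += 50.0  # exaggerated metal attenuation

clean = inpaint_metal_trace(corrupted, mask)
```

On this smooth toy data the linear interpolation recovers the row exactly; on real anatomy it blurs structure inside the trace, which is precisely the loss Sino-Net is trained to restore.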


Subjects
Artifacts; Deep Learning; Reproducibility of Results; Tomography, X-Ray Computed/methods; Metals; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Algorithms